by Laurel Duggan
OpenAI, the company behind the headline-grabbing artificial intelligence chatbot ChatGPT, has an automated content moderation system designed to flag hateful speech, but the software treats speech differently depending on which demographic groups are insulted, according to a study conducted by research scientist David Rozado.
The content moderation system used in ChatGPT and other OpenAI products is designed to detect and block hate, threats, self-harm and sexual comments about minors, according to Rozado. The researcher fed the system prompts that ascribed negative adjectives to demographic groups defined by race, gender, religion and other markers, and found that the software favors some demographic groups over others.
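In practice, an audit of this kind can be automated: the same insult template is scored once per target group, and the per-group results are compared. The following Python sketch shows how a researcher might probe OpenAI's public moderation endpoint this way; the template sentence, the group list and the helper function are illustrative stand-ins rather than Rozado's actual prompts or code, and an OPENAI_API_KEY environment variable is assumed.

    import os
    import requests

    # Illustrative template and target groups -- NOT Rozado's actual prompts.
    TEMPLATE = "I can't stand {group} people."
    GROUPS = ["disabled", "gay", "Muslim", "Christian",
              "wealthy", "Republican", "Democrat", "thin"]

    def moderation_scores(text):
        """Score one sentence with OpenAI's moderation endpoint and return
        the overall flag plus the per-category scores (hate, violence, etc.)."""
        resp = requests.post(
            "https://api.openai.com/v1/moderations",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"input": text},
            timeout=30,
        )
        resp.raise_for_status()
        result = resp.json()["results"][0]
        return {"flagged": result["flagged"], **result["category_scores"]}

    for group in GROUPS:
        scores = moderation_scores(TEMPLATE.format(group=group))
        print(f"{group:12s} flagged={scores['flagged']}  hate={scores['hate']:.4f}")

Because each sentence is identical except for the group named, any systematic gap in the returned "hate" scores, or in which sentences get flagged, reflects how the system weights the target group, which is the pattern Rozado reports.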
The software was far more likely to flag negative comments about Democrats than about Republicans, and was more likely to allow hateful comments about conservatives than about liberals, according to Rozado. Negative comments about women were more likely to be flagged than negative comments about men.
“The ratings partially resemble left-leaning political orientation hierarchies of perceived vulnerability,” Rozado wrote. “That is, individuals of left-leaning political orientation are more likely to perceive some minority groups as disadvantaged and in need of preferential treatment to overcome said disadvantage.”
Negative comments about people who are disabled, gay, transgender, Asian, black or Muslim were the most likely to be flagged as hateful by the OpenAI content moderation system, ranking far above comments about Christians, Mormons, thin people and various other groups, according to Rozado. Comments about wealthy people, Republicans, upper-middle- and middle-class people and university graduates were at the bottom of the list.
The discovery comes amid growing concern about left-wing bias in OpenAI products including ChatGPT, which favors left-leaning talking points including, in some instances, outright falsehoods, according to a Daily Caller News Foundation investigation.
Brian Chau, a mathematician who frequently writes about OpenAI, warned in a December UnHerd article that bias in artificial intelligence could leak out into important domains of government and public life as the technology is widely adopted.
“‘Specific’ artificial intelligence, or paper-pushing at scale, offers a single chance to cheaply rewrite the bureaucratic processes governing large corporations, state and federal government agencies, NGOs and media outlets,” he wrote. “In the right hands, it can be used to eliminate political biases endemic to the hiring processes of these organisations. In the wrong hands, it may permanently catechise a particular ideology.”
Rozado and OpenAI did not respond to the DCNF’s requests for comment.
– – –
Laurel Duggan is a reporter at Daily Caller News Foundation.
Photo “Man on Phone” by cottonbro studio.